Interpretable Quantum Advantage in Neural Sequence Learning

Authors

Abstract

Quantum neural networks have been widely studied in recent years, given their potential practical utility and results regarding their ability to efficiently express certain classical data. However, analytic results to date rely on assumptions and arguments from complexity theory. Due to this, there is little intuition as to the source of the expressive power of quantum neural networks, or for which classes of data any advantage can reasonably be expected to hold. Here, we study the relative expressive power between a broad class of neural network sequence models and a class of recurrent models based on Gaussian operations with non-Gaussian measurements. We explicitly show that quantum contextuality is the source of an unconditional memory separation in the expressivity of the two model classes. Additionally, as we are able to pinpoint contextuality as the source of this separation, we use this intuition to study the relative performance of our introduced class of models on a standard translation data set exhibiting linguistic contextuality. In doing so, we demonstrate that our introduced models outperform the state of the art even in practice.
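To make the model class named in the abstract more concrete, below is a minimal, hypothetical sketch of a recurrent cell whose hidden state is a single-mode Gaussian state in phase space: each input token applies a Gaussian operation (a fixed rotation and squeezing plus an input-dependent displacement), and the per-step readout is a non-Gaussian observable (a quantity proportional to the photon-number parity, obtained from the Wigner function at the origin). The cell structure, the parameter names theta, squeeze, and kick, and the parity-style readout are illustrative assumptions for this sketch, not the construction analyzed in the paper.

```python
# A minimal, hypothetical sketch (not the paper's construction) of a recurrent
# cell whose hidden state is a single-mode Gaussian state, updated by
# input-dependent Gaussian operations and read out through a non-Gaussian
# observable. All names and parameter values are illustrative assumptions.

import numpy as np

VACUUM_COV = 0.5 * np.eye(2)  # vacuum covariance in (x, p), hbar = 1 convention


def rotation(theta: float) -> np.ndarray:
    """Symplectic matrix of a phase-space rotation (a Gaussian operation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])


def squeezing(r: float) -> np.ndarray:
    """Symplectic matrix of single-mode squeezing (also a Gaussian operation)."""
    return np.diag([np.exp(-r), np.exp(r)])


def wigner_at_origin(mean: np.ndarray, cov: np.ndarray) -> float:
    """Value of the Gaussian Wigner function at the phase-space origin.

    For a Gaussian state this is just the 2D normal density at 0; it is
    proportional to the expectation value of the photon-number parity
    operator, i.e. a non-Gaussian observable of the Gaussian hidden state.
    """
    inv = np.linalg.inv(cov)
    norm = 2.0 * np.pi * np.sqrt(np.linalg.det(cov))
    return float(np.exp(-0.5 * mean @ inv @ mean) / norm)


def run_gaussian_recurrent_cell(tokens, theta=0.7, squeeze=0.2, kick=0.8):
    """Process a sequence of tokens in {0, 1}, emitting one readout per step.

    Each step applies the same rotation/squeezing (the recurrent Gaussian
    update) plus a token-dependent displacement; the readout is non-Gaussian.
    """
    mean, cov = np.zeros(2), VACUUM_COV.copy()
    S = rotation(theta) @ squeezing(squeeze)
    outputs = []
    for t in tokens:
        mean = S @ mean + np.array([kick * t, 0.0])  # displacement encodes input
        cov = S @ cov @ S.T                          # Gaussian update of covariance
        outputs.append(wigner_at_origin(mean, cov))
    return outputs


if __name__ == "__main__":
    # Example usage on a short binary token sequence.
    print(run_gaussian_recurrent_cell([1, 0, 1, 1, 0]))
```

The design point this toy illustrates is only that the recurrent update itself stays Gaussian (means and covariances suffice as the hidden state) while the expressive readout comes from the non-Gaussian measurement; how such measurements yield a contextuality-based memory separation is the subject of the paper itself.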

Similar resources

InterpNET: Neural Introspection for Interpretable Deep Learning

Humans are able to explain their reasoning. On the contrary, deep neural networks are not. This paper attempts to bridge this gap by introducing a new way to design interpretable neural networks for classification, inspired by physiological evidence of the human visual system’s inner-workings. This paper proposes a neural network design paradigm, termed InterpNET, which can be combined with any...

Full text

Demonstration of quantum advantage in machine learning

The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution [1]. One measure of the algorithmic performance is the query complexity [2], i.e., the scaling of the number of oracle calls...

Full text

Sequence to Sequence Learning in Neural Network

Neural Network Elements. Deep learning is the name we use for “stacked neural networks”; that is, networks composed of several layers. The layers are made of nodes. A node is just a place where computation happens, loosely patterned on a neuron in the human brain, which fires when it encounters sufficient stimuli. Deep Neural Networks (DNNs) are powerful models that have achieved excellent perfo...

Full text

Interpretable Deep Convolutional Neural Networks via Meta-learning

Model interpretability is a requirement in many applications in which crucial decisions are made by users relying on a model’s outputs. The recent movement for “algorithmic fairness” also stipulates explainability, and therefore interpretability of learning models. And yet the most successful contemporary Machine Learning approaches, the Deep Neural Networks, produce models that are highly non-...

Full text

Programmatically Interpretable Reinforcement Learning

We study the problem of generating interpretable and verifiable policies through reinforcement learning. Unlike the popular Deep Reinforcement Learning (DRL) paradigm, in which the policy is represented by a neural network, the aim in Programmatically Interpretable Reinforcement Learning (PIRL) is to find a policy that can be represented in a high-level programming language. Such programmatic p...

Full text

Journal

Journal title: PRX Quantum

Year: 2023

ISSN: 2691-3399

DOI: https://doi.org/10.1103/prxquantum.4.020338